Introduction to Open Data Science - Course Project

Chapter 1. About the project

I found out about the course from Kimmo. I am eager to learn, but I know there is much for me to learn, as this is something I have no previous experience in. I am expecting to learn to use R at least at a level where I will understand where to search for help and know how to communicate with statisticians about the results of my studies. If I have enough time, I might even learn to really analyze data by myself as well.

The way the book R for Health Data Science has been written is really nice. At first I had problems understanding what I should do with Exercise set 1, but then I got help during the computer clinic session from the assistant teachers. I really liked the data visualization parts, and I started to understand the basic functions. I had no previous experience with the Markdown language, so I did not understand what to do with this file either. I asked for help from my partner and was advised to use https://markdownlivepreview.com/, and now I understand how this works. But when I read the instructions further, I noticed the instructions were available there as well. I was just too fast :) I had some problems saving my work from RStudio to GitHub; I hope I will learn this next week.

This is a link to my Repository.

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Mon Dec 12 21:35:41 2022"

The text continues here.


date()
## [1] "Mon Dec 12 21:35:41 2022"

Chapter 2. Performing and interpreting regression analysis

Data wrangling exercise

I have done the data wrangling exercise. I had problems, but I got help; with the existing instructions alone I would not have managed it, but I got something done. When doing the analysis exercise I noticed there was something wrong with my data, or the way I had saved it, or the way I tried to read it into RStudio, so I used the data from the link given in the instructions. I compared that data with my own and it looks the same, but something is going on.

Reading the raw data and exploring it

This is data from [ASSIST 2014 International survey of Approaches to Learning by Vehkalahti](https://www.mv.helsinki.fi/home/kvehkala/JYTmooc/JYTOPKYS2-meta.txt)

Here I have set the working directory and tried to read the csv file (I noticed later this did not work), and then read the pre-existing table instead.

library(tidyverse)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.2 ──
## ✔ ggplot2 3.3.6      ✔ purrr   0.3.5 
## ✔ tibble  3.1.8      ✔ dplyr   1.0.10
## ✔ tidyr   1.2.1      ✔ stringr 1.4.1 
## ✔ readr   2.1.3      ✔ forcats 0.5.2 
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
setwd("C:/Users/riikk/Documents/Open data science/IODS-project/Data")
# students2014 <- read_csv("learning2014.csv")  # this did not work, so I commented it out and read the data from the URL below instead
students2014 <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/learning2014.txt",
                           sep = ",", header = TRUE)
dim(students2014)
## [1] 166   7
str(students2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

Above I have also explored the dimensions and structure of the data: 166 rows and 7 columns, i.e. 166 observations of 7 variables.

The data contains seven variables:

  • gender Gender: M (Male), F (Female) (coded 1 = Male, 2 = Female in the original data)
  • age Age (in years) derived from the date of birth
  • attitude Global attitude toward statistics
  • deep Deep approach (scale: min = 1, max = 5)
  • stra Strategic approach (scale: min = 1, max = 5)
  • surf Surface approach (scale: min = 1, max = 5)
  • points Exam points

Summary and graphical overview of the data

Next I have summarized the data both in text and visually. For example, the age of the participants varied between 17 and 55 (mean 25.51), and the exam points varied between 7 and 33 (mean 22.72).

summary(students2014)
##     gender               age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :3.200   Median :3.667  
##                     Mean   :25.51   Mean   :3.143   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :5.000   Max.   :4.917  
##       stra            surf           points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00
library(GGally)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(ggplot2)
p <- ggpairs(students2014, mapping = aes(col = gender), lower = list(combo = wrap("facethist", bins = 20)))
# draw the plot
p

[Figure: graphical overview (ggpairs plot of the variables, coloured by gender)]

When exploring the data visually, it looks like there were fewer male participants, the participating women were a bit younger, and women seem to have less positive attitudes. There seem to be statistically significant correlations between attitude and points, between surf and attitude in males, and between surf and deep in males. In attitude there seems to be a difference in the distributions of male vs. female participants. There seem to be a lot of outliers in age, and also some in deep and attitude (males).

Fitted model

my_model <- lm(points ~ attitude + stra + surf, data = students2014)
par(mar=c(1,1,1,1))
summary(my_model)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = students2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

I chose three variables (attitude, stra, surf) and fitted a regression model where exam points was the target (dependent, outcome) variable. This is multivariable linear regression. With this we explore the relationship between the explanatory variables (attitude, stra and surf) and the exam points.

Summary of the fitted model, coefficients:

  (Intercept)  Estimate 11.0171,  Std. Error 3.6837, t value 2.991,  Pr(>|t|) 0.00322 **
  attitude     Estimate 3.3952,   Std. Error 0.5741, t value 5.913,  Pr(>|t|) 1.93e-08 ***
  stra         Estimate 0.8531,   Std. Error 0.5416, t value 1.575,  Pr(>|t|) 0.11716
  surf         Estimate -0.5861,  Std. Error 0.8014, t value -0.731, Pr(>|t|) 0.46563

  Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

[Figure: summary of the fitted model]

Only attitude had a statistically significant relationship with points, so I removed the other variables and ran the model without them; this is univariable (simple) linear regression.

my_model <- lm(points ~ attitude, data = students2014)
summary(my_model)
## 
## Call:
## lm(formula = points ~ attitude, data = students2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.6372     1.8303   6.358 1.95e-09 ***
## attitude      3.5255     0.5674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

“R-squared is another measure of how close the data are to the fitted line. 0.0 indicates that none of the variability in the dependent is explained by the explanatory (no relationship between data points and fitted line) and 1.0 indicates that the model explains all of the variability in the dependent.” In this case the multiple R-squared is 0.1906, meaning that only a small part of the variability is explained by attitude.
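As a sanity check of my own (not part of the original output), the same R-squared can be computed by hand from the residuals of the fitted model, and for a simple regression it equals the squared correlation between the two variables:

# R-squared by hand: 1 - residual sum of squares / total sum of squares
rss <- sum(residuals(my_model)^2)
tss <- sum((students2014$points - mean(students2014$points))^2)
1 - rss / tss
# for the univariable model this equals the squared correlation of points and attitude
cor(students2014$points, students2014$attitude)^2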

Lastly, I have produced diagnostic plots: Residuals vs Fitted values, Normal Q-Q plot and Residuals vs Leverage. A quantile-quantile plot is a graphical method for comparing the distribution of our own data to a theoretical distribution, such as the normal distribution. A Q-Q plot simply plots the quantiles of our data against the theoretical quantiles of a particular distribution (here the normal distribution). If our data follow that distribution (e.g., normal), the data points fall on the theoretical straight line.
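As a small sketch of my own, the same kind of Q-Q plot can also be drawn directly from the model residuals with the base R functions qqnorm() and qqline():

# Q-Q plot of the residuals against theoretical normal quantiles
qqnorm(residuals(my_model))
# reference line: if the residuals are roughly normal, the points fall near this line
qqline(residuals(my_model))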

plot(my_model, which = c(1, 2, 5))

[Figure: Residuals vs Fitted]

[Figure: Normal Q-Q]

[Figure: Residuals vs Leverage]

The first one looks quite nice to me, as the spread of the observations around the fitted line is about the same on the left as on the right. In the second plot the residuals diverge from the straight line at the right end, so the residuals are not entirely normally distributed. As I don't have much experience with this, I don't know whether that is a problem or not, since most of the points fit the line nicely.

I don’t know why this prints the last graphs twice.

And then I ran out of time :D


date()
## [1] "Mon Dec 12 21:35:43 2022"

Chapter 3. Analysis exercise: Logistic regression

2. Reading the data and describing the dataset

alc <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/alc.csv", sep = ",", header = TRUE)

colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "guardian"   "traveltime" "studytime"  "schoolsup" 
## [16] "famsup"     "activities" "nursery"    "higher"     "internet"  
## [21] "romantic"   "famrel"     "freetime"   "goout"      "Dalc"      
## [26] "Walc"       "health"     "failures"   "paid"       "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

The dataset explores student achievement in secondary education in two Portuguese schools. The data was collected using school reports and questionnaires. Performance in mathematics and Portuguese language has been measured, and in addition the data includes attributes such as demographic, social and school-related features. In the dataset the variable 'alc_use' is the average of the 'Dalc' and 'Walc' variables, and 'high_use' is TRUE if 'alc_use' is higher than 2 and FALSE otherwise.

More detailed description of the dataset can be read at: https://archive.ics.uci.edu/ml/datasets/Student+Performance

Some of the answers are binary (e.g. yes/no), others numeric (e.g. on a scale of 1-5) or nominal (e.g. mother/father/other).
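The data I read in already contains alc_use and high_use, but as a sketch of my own of how they would be derived from the raw Dalc and Walc columns with dplyr (assuming those columns, as listed above):

library(dplyr)
# alc_use = average of weekday (Dalc) and weekend (Walc) alcohol use,
# high_use = TRUE when that average is greater than 2
alc <- alc %>%
  mutate(alc_use = (Dalc + Walc) / 2,
         high_use = alc_use > 2)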

3. Relationships between alcohol consumption and other variables

I chose four variables: studytime, activities, health, goout

  • I hypothesize that if alcohol consumption is high, a student spends less time studying
  • I hypothesize that if a student takes part in extra-curricular activities, their alcohol consumption is lower
  • I hypothesize that if a student's alcohol consumption is high, they rate their current health status as worse
  • I hypothesize that if a student's alcohol consumption is high, they go out with friends more

4. Exploring the distributions of the variables

Below I comment on my findings and compare the results of the exploration to my previously stated hypotheses.

library(tidyverse)
library(dplyr)
library(ggplot2)
library(readr)
library(gmodels)

## Cross tabulations
CrossTable(alc$high_use, alc$studytime)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##              | alc$studytime 
## alc$high_use |         1 |         2 |         3 |         4 | Row Total | 
## -------------|-----------|-----------|-----------|-----------|-----------|
##        FALSE |        56 |       128 |        52 |        23 |       259 | 
##              |     2.314 |     0.017 |     2.381 |     0.889 |           | 
##              |     0.216 |     0.494 |     0.201 |     0.089 |     0.700 | 
##              |     0.571 |     0.692 |     0.867 |     0.852 |           | 
##              |     0.151 |     0.346 |     0.141 |     0.062 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|
##         TRUE |        42 |        57 |         8 |         4 |       111 | 
##              |     5.400 |     0.041 |     5.556 |     2.075 |           | 
##              |     0.378 |     0.514 |     0.072 |     0.036 |     0.300 | 
##              |     0.429 |     0.308 |     0.133 |     0.148 |           | 
##              |     0.114 |     0.154 |     0.022 |     0.011 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|
## Column Total |        98 |       185 |        60 |        27 |       370 | 
##              |     0.265 |     0.500 |     0.162 |     0.073 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|
## 
## 
CrossTable(alc$high_use, alc$activities)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##              | alc$activities 
## alc$high_use |        no |       yes | Row Total | 
## -------------|-----------|-----------|-----------|
##        FALSE |       120 |       139 |       259 | 
##              |     0.224 |     0.210 |           | 
##              |     0.463 |     0.537 |     0.700 | 
##              |     0.670 |     0.728 |           | 
##              |     0.324 |     0.376 |           | 
## -------------|-----------|-----------|-----------|
##         TRUE |        59 |        52 |       111 | 
##              |     0.523 |     0.490 |           | 
##              |     0.532 |     0.468 |     0.300 | 
##              |     0.330 |     0.272 |           | 
##              |     0.159 |     0.141 |           | 
## -------------|-----------|-----------|-----------|
## Column Total |       179 |       191 |       370 | 
##              |     0.484 |     0.516 |           | 
## -------------|-----------|-----------|-----------|
## 
## 
CrossTable(alc$high_use, alc$health)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##              | alc$health 
## alc$high_use |         1 |         2 |         3 |         4 |         5 | Row Total | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
##        FALSE |        35 |        28 |        61 |        45 |        90 |       259 | 
##              |     0.243 |     0.067 |     0.446 |     0.059 |     0.653 |           | 
##              |     0.135 |     0.108 |     0.236 |     0.174 |     0.347 |     0.700 | 
##              |     0.761 |     0.667 |     0.762 |     0.726 |     0.643 |           | 
##              |     0.095 |     0.076 |     0.165 |     0.122 |     0.243 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
##         TRUE |        11 |        14 |        19 |        17 |        50 |       111 | 
##              |     0.568 |     0.156 |     1.042 |     0.138 |     1.524 |           | 
##              |     0.099 |     0.126 |     0.171 |     0.153 |     0.450 |     0.300 | 
##              |     0.239 |     0.333 |     0.237 |     0.274 |     0.357 |           | 
##              |     0.030 |     0.038 |     0.051 |     0.046 |     0.135 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## Column Total |        46 |        42 |        80 |        62 |       140 |       370 | 
##              |     0.124 |     0.114 |     0.216 |     0.168 |     0.378 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## 
## 
CrossTable(alc$high_use, alc$goout)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##              | alc$goout 
## alc$high_use |         1 |         2 |         3 |         4 |         5 | Row Total | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
##        FALSE |        19 |        82 |        97 |        40 |        21 |       259 | 
##              |     0.842 |     2.928 |     2.012 |     3.904 |     6.987 |           | 
##              |     0.073 |     0.317 |     0.375 |     0.154 |     0.081 |     0.700 | 
##              |     0.864 |     0.845 |     0.808 |     0.513 |     0.396 |           | 
##              |     0.051 |     0.222 |     0.262 |     0.108 |     0.057 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
##         TRUE |         3 |        15 |        23 |        38 |        32 |       111 | 
##              |     1.964 |     6.832 |     4.694 |     9.109 |    16.303 |           | 
##              |     0.027 |     0.135 |     0.207 |     0.342 |     0.288 |     0.300 | 
##              |     0.136 |     0.155 |     0.192 |     0.487 |     0.604 |           | 
##              |     0.008 |     0.041 |     0.062 |     0.103 |     0.086 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## Column Total |        22 |        97 |       120 |        78 |        53 |       370 | 
##              |     0.059 |     0.262 |     0.324 |     0.211 |     0.143 |           | 
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## 
## 

Bar plots

studytime_barplot <- ggplot(data = alc, aes(x=studytime, fill = high_use))
studytime_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")

activities_barplot <- ggplot(data = alc, aes(x=activities, fill = high_use))
activities_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")

health_barplot <- ggplot(data = alc, aes(x=health, fill = high_use))
health_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")

goout_barplot <- ggplot(data = alc, aes(x=goout, fill = high_use))
goout_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")

## Box plots

studytime_boxplot <- ggplot(alc, aes(x = high_use, y = studytime))

# define the plot as a boxplot and draw it
studytime_boxplot + geom_boxplot() 

activities_boxplot <- ggplot(alc, aes(x = high_use, y = activities))

activities_boxplot + geom_boxplot() 

A boxplot makes no sense in this case, as activities is a binary variable.
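Since activities is binary, a proportion bar plot felt more informative to me; here is a small sketch of my own of one way to draw it:

# share of high_use within each activities group
ggplot(alc, aes(x = activities, fill = high_use)) +
  geom_bar(position = "fill") +
  labs(y = "Proportion of students")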

health_boxplot <- ggplot(alc, aes(x = high_use, y = health))

health_boxplot + geom_boxplot() 

goout_boxplot <- ggplot(alc, aes(x = high_use, y = goout))

goout_boxplot + geom_boxplot() 

There seems to be an association between high_use and goout, as well as between high_use and studytime, supporting my initial hypotheses. There does not seem to be an association between health and alcohol use, or between activities and alcohol use.

5. Logistic regression

Presenting and interpreting summary of the fitted model

Fitting the model

m <- glm(high_use ~ studytime + activities + health + goout, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ studytime + activities + health + goout, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.7878  -0.7844  -0.5507   0.9040   2.6331  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   -2.54832    0.63687  -4.001  6.3e-05 ***
## studytime     -0.58926    0.16676  -3.534  0.00041 ***
## activitiesyes -0.30072    0.25281  -1.190  0.23423    
## health         0.14004    0.09029   1.551  0.12090    
## goout          0.76189    0.11894   6.406  1.5e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 452.04  on 369  degrees of freedom
## Residual deviance: 383.67  on 365  degrees of freedom
## AIC: 393.67
## 
## Number of Fisher Scoring iterations: 4

The variables I thought were associated with high_use based on visual observation (studytime and goout) seem to be statistically significant based on this summary as well. For studytime there is a negative association with high_use, meaning that students who study more have a lower probability of high alcohol use. High alcohol use and going out with friends are positively associated, meaning that students who go out more with friends have a higher probability of high alcohol use.

Presenting and interpreting the coefficients of the model as odds ratios

# compute odds ratios (OR)
OR <- coef(m) %>% exp

# compute confidence intervals (CI)
CI <- confint(m)%>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI) 
##                       OR      2.5 %    97.5 %
## (Intercept)   0.07821325 0.02178097 0.2659023
## studytime     0.55473498 0.39563591 0.7621319
## activitiesyes 0.74028531 0.44966067 1.2138630
## health        1.15032068 0.96559347 1.3768670
## goout         2.14231738 1.70763719 2.7248784

For activities and health the confidence interval crosses 1, which means there is no clear association. For goout, the odds of high alcohol use are about 2.14 times higher for each one-point increase in going out with friends. So none of the variables has a very high odds ratio.

6. Exploring the predictive power of my model

 # fit the model
m <- glm(high_use ~ studytime + goout, data = alc, family = "binomial")

# predict() the probability of high_use
probabilities <- predict(m, type = "response")

library(dplyr)
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)

# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)

# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction) %>% addmargins()
##         prediction
## high_use FALSE TRUE Sum
##    FALSE   238   21 259
##    TRUE     70   41 111
##    Sum     308   62 370

This means that my model with two statistically significant variables is not great at predicting high alcohol use. Other predictors would need to be added to create a more accurate prediction. Among all 370 predictions, 91 were inaccurate (21 false positives and 70 false negatives).
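To put numbers on this, here is a small sketch of my own computing accuracy, sensitivity and specificity from the same 2x2 table:

conf <- table(high_use = alc$high_use, prediction = alc$prediction)
accuracy    <- sum(diag(conf)) / sum(conf)                   # (238 + 41) / 370
sensitivity <- conf["TRUE", "TRUE"] / sum(conf["TRUE", ])    # share of true high users detected
specificity <- conf["FALSE", "FALSE"] / sum(conf["FALSE", ]) # share of non-high users detected
c(accuracy = accuracy, sensitivity = sensitivity, specificity = specificity)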

Computing the training error

# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2459459

Computing the average number of wrong predictions for a random guess (probabilities drawn from a uniform distribution between 0.0 and 1.0)

alc <- mutate(alc, random_guess = runif(n()))
loss_func(class = alc$high_use, prob = alc$random_guess)
## [1] 0.5081081
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = nrow(alc))

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2459459

This tells us that about 25% of the predictions are wrong with this model. The random guess was wrong about 50% of the time, while my model was wrong about 25% of the time.

7. BONUS

# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = nrow(alc))

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2459459
#10-fold cross validation
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]
## [1] 0.2459459

There was no big difference between leave-one-out cross-validation (K equal to the number of observations) and 10-fold cross-validation. My model has a slightly smaller prediction error than the model introduced in the exercise set.

date()
## [1] "Mon Dec 12 21:35:48 2022"

Chapter 4. Analysis exercise: Clustering and classification

library(tidyverse)
#install.packages(c("MASS", "corrplot"))

Loading the Boston data from the MASS package

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
data("Boston")

2. Exploring the structure and the dimensions of the data and describing the dataset

str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14

This is a dataset included in the MASS package that is often used for teaching purposes. The dataset has 506 observations of 14 variables, i.e. 506 rows and 14 columns. It describes housing values in the suburbs of Boston. The variables include, for example, the per capita crime rate by town, the weighted mean of distances to five Boston employment centres, and the pupil-teacher ratio by town. More about the dataset can be read here: https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html

3. Graphical overview and summaries of the variables

library(corrplot)
## corrplot 0.92 loaded
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00
pairs(Boston)

corr_boston <- cor(Boston)
corrplot(corr_boston, method="circle", type = "upper", cl.pos = "r", tl.pos = "d", tl.cex = 0.7)

Describing and interpreting the outputs, commenting on the distributions of the variables and the relationships between them:

  • crim (per capita crime rate by town): the mean (3.61352) and median (0.25651) are low, but there are probably a few towns where the crime rate is very high, as the range is from 0.00632 to 88.97620
  • zn (proportion of residential land zoned for lots over 25,000 sq.ft.): the range is from 0 to 100, the mean is 11.36 and the median 0, so most of the values are in the lower range of the scale
  • indus (proportion of non-retail business acres per town): the range is from 0.46 to 27.74; the mean and median are close to each other but skewed a bit towards the lower end of the range
  • chas (Charles River dummy variable, = 1 if tract bounds river, 0 otherwise): mean 0.06917, meaning it is more common that a tract does not bound the river
  • nox (nitrogen oxides concentration, parts per 10 million): the range is from 0.3850 to 0.8710 and the mean is 0.5547, with the median almost the same; slightly skewed towards lower concentrations
  • rm (average number of rooms per dwelling): range 3.561-8.780, mean 6.285, median almost the same
  • age (proportion of owner-occupied units built prior to 1940): range 2.90-100, mean 68.57, and the first quartile is already over 45, so most towns seem to have a rather high proportion of owner-occupied units built prior to 1940
  • dis (weighted mean of distances to five Boston employment centres): range 1.130-12.127, mean and median a bit over 3, so most towns are quite close to the employment centres
  • rad (index of accessibility to radial highways): range 1-24, mean 9.549, median 5; skewed towards the lower range of the index
  • tax (full-value property-tax rate per $10,000): range 187-711, mean 408.2; quite evenly distributed
  • black (1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town): range 0.32-396.90; already the first quartile is 375.38, so there are only a few observations at the lower end of the scale
  • lstat (lower status of the population, percent): range 1.73-37.97; the 3rd quartile is 16.95, so most of the observations are at the lower end of the scale
  • medv (median value of owner-occupied homes in $1000s): range 5-50, mean 22.53; quite evenly distributed

Relationships between the variables:

The highest positive correlation is between rad and tax (better access to radial highways is correlated with a higher property-tax rate, which makes sense!). High negative correlations are between age and dis, lstat and medv, dis and nox (this has the highest negative correlation: the farther away you are from the employment centres, the lower the nitrogen oxides concentration, which also makes sense!), as well as indus and dis.
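Instead of reading the correlations off the plot, they can also be listed numerically; here is a small sketch of my own that prints the strongest pairwise correlations:

# turn the correlation matrix into a long table, keep each pair once, sort by absolute value
corr_long <- as.data.frame(as.table(corr_boston))
corr_long <- subset(corr_long, as.character(Var1) < as.character(Var2))
head(corr_long[order(-abs(corr_long$Freq)), ], 5)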

4. Standardizing the dataset, printing out summaries of the scaled data

boston_scaled <- scale(Boston)
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
class(boston_scaled)
## [1] "matrix" "array"
boston_scaled <- as.data.frame(boston_scaled)

The standard score or Z-score is the number of standard deviations by which the value of a raw score is above or below the mean value of what is being observed or measured. The scale() function does exactly this for every column, so all the variables end up with mean 0 and standard deviation 1 and become easier to compare.
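As a small check of my own of what scale() actually does, here is the same standardization written out by hand for one column:

# z-score by hand for crim: (value - mean) / standard deviation
crim_z <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
# should be TRUE: scale() performs exactly this transformation column by column
all.equal(crim_z, as.numeric(boston_scaled$crim))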

Creating a categorical variable

boston_scaled$crim <- as.numeric(boston_scaled$crim)
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)

Dividing the dataset to train and test sets

n <- nrow(boston_scaled)
ind <- sample(n,  size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
correct_classes <- test$crime

test <- dplyr::select(test, -crime)
correct_classes
##   [1] low      med_low  med_high med_high med_high med_high med_high med_high
##   [9] low      med_low  low      med_low  med_low  low      med_low  med_low 
##  [17] med_low  med_low  low      low      low      low      med_low  med_low 
##  [25] low      med_low  med_high med_high med_high med_high med_high med_high
##  [33] med_high med_high low      low      low      low      low      low     
##  [41] med_low  med_low  med_high med_low  low      med_high med_high med_low 
##  [49] med_low  med_high med_low  med_low  low      low      med_low  low     
##  [57] med_low  med_high med_high med_high med_low  low      low      low     
##  [65] low      low      high     high     high     high     high     high    
##  [73] high     high     high     high     high     high     high     high    
##  [81] high     high     high     high     high     high     high     high    
##  [89] high     high     high     high     high     high     high     high    
##  [97] high     high     med_low  med_low  med_high low     
## Levels: low med_low med_high high

5. Fitting the linear discriminant analysis on the train set.

lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2524752 0.2549505 0.2574257 0.2351485 
## 
## Group means:
##                   zn      indus        chas        nox         rm        age
## low       0.95481880 -0.9331130 -0.11793298 -0.9031749  0.4561976 -0.8961948
## med_low  -0.09114703 -0.3409746 -0.08120770 -0.5526158 -0.1482438 -0.3094211
## med_high -0.37015262  0.1743085  0.18195173  0.4238317  0.1757854  0.4011333
## high     -0.48724019  1.0172655 -0.02367011  1.0709996 -0.4461854  0.7987551
##                 dis        rad        tax     ptratio       black       lstat
## low       0.9022602 -0.6902506 -0.7324383 -0.47080126  0.38057343 -0.77529042
## med_low   0.3437652 -0.5436697 -0.4837992 -0.07408407  0.31950614 -0.13926775
## med_high -0.3880631 -0.3811346 -0.2988194 -0.39695214  0.09644475 -0.03310833
## high     -0.8616373  1.6366336  1.5129868  0.77903654 -0.59331671  0.88519789
##                  medv
## low       0.519082569
## med_low   0.006355893
## med_high  0.267525830
## high     -0.672427452
## 
## Coefficients of linear discriminants:
##                 LD1         LD2          LD3
## zn       0.08238913  0.70492021 -0.927669391
## indus    0.02081688 -0.31911804  0.107620917
## chas    -0.08762037 -0.01359411  0.010021961
## nox      0.39889253 -0.68730586 -1.308749453
## rm      -0.07572526 -0.11061297 -0.265938212
## age      0.27315117 -0.27015275 -0.006347675
## dis     -0.06150909 -0.18670195  0.023427586
## rad      2.94750983  0.86625360 -0.185383060
## tax      0.01236498  0.09867010  0.591106207
## ptratio  0.10108288  0.04749678 -0.208791134
## black   -0.14449105  0.04233960  0.130585336
## lstat    0.20428700 -0.20882259  0.333548087
## medv     0.17461143 -0.41782867 -0.134123217
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9393 0.0465 0.0141
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

classes <- as.numeric(train$crime)

plot(lda.fit, dimen = 2)
lda.arrows(lda.fit, myscale = 1)

6. Saving the crime categories and predicting the classes with the LDA model

I had already saved the crime categories of the test set (correct_classes) above.

lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       12      11        2    0
##   med_low    6      16        1    0
##   med_high   0      12       10    0
##   high       0       0        0   32

The model mostly predicts correctly: the high category is predicted perfectly, while the three lower categories are confused with their neighbours more often (70 of the 102 test observations were classified correctly).
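As a quick sketch of my own to quantify this, the overall classification accuracy on the test set:

# share of test observations classified into the correct crime category
mean(correct_classes == lda.pred$class)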

7. Reloading the Boston dataset and standardizing the dataset, calculating the distances between the observations

library(MASS)
data("Boston")

boston_scaled <- scale(Boston)
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
class(boston_scaled)
## [1] "matrix" "array"
boston_scaled <- as.data.frame(boston_scaled)


dist_eu <- dist(boston_scaled)

summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
dist_man <- dist(boston_scaled, method = "manhattan")

Running the k-means algorithm on the dataset.

set.seed(123)
k_max <- 10

twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')

km <- kmeans(boston_scaled, centers = 6)

Around 6 clusters seems optimal.

pairs(boston_scaled, col = km$cluster)  # colour the observations by their k-means cluster

NOTE! Here are some tips for making the plot more readable :)

pairs(Boston, gap = 0.5, oma = c(0, 0, 0, 0), pch = 20)

  • ‘gap’ sets the space between subplots (defaults to 1)
  • ‘oma’ sets the outer margins of the plot
  • Different numbers for ‘pch’ generate different points in the plots. For example ‘pch = 20’ results in slightly smaller points than the default.

This is a very busy plot, so it is hard to read. I think I followed the instructions, so I don't know how to fix it, and I gave up on interpreting the results.

Bonus

library(MASS)
library(ggplot2)

set.seed(123)

# load and scale data
data("Boston")
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)

# perform k-means with 3 clusters. add cluster as a new column
km <- kmeans(Boston, centers = 3)
boston_scaled$cluster <- km$cluster

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# fit the model and plot 
lda.fit = lda(cluster ~ ., data=boston_scaled)
plot(lda.fit, dimen = 2)
lda.arrows(lda.fit, myscale = 2)

The vectors for rad, tax and black are clearly visible, and they are the strongest determinants in separating the observations into different clusters. The rad and tax vectors point in almost the same direction, which means they do not carry independent predictive power; as seen earlier they are correlated, so they work in combination. The black vector points in another direction, so it seems to be independent of the others.

Superbonus

Scaling the data and clustering it, then fitting the model. We need at least 4 clusters so that the LDA produces three discriminants (LD1-LD3) for the 3D plot.

boston_scaled  <- as.data.frame(scale(Boston))
km <- kmeans(boston_scaled, centers = 4)

boston_scaled$cluster <- km$cluster
lda.fit <- lda(cluster ~ ., data = boston_scaled)

# add categorical variable "crime"
bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)

# create train and test data
ind <- sample(nrow(boston_scaled),  size = nrow(boston_scaled) * 0.9)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]

# select predictors
model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 455  14
dim(lda.fit$scaling)
## [1] 14  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

# plot 3D, color set to crime classes of train data
library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)
# plot 3D, color set to clusters of train data
train$cluster <- as.factor(train$cluster)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$cluster)

In the first plot the colour is set according to the crime classes of the train data and in the second one according to the k-means clusters. In many respects they are similar, though (cluster 1 seems to include the same observations as the ones with the high crime rate).

date()
## [1] "Mon Dec 12 21:35:53 2022"

Chapter 5. Dimensionality reduction techniques

library(tidyverse)
library(tidyr)
library(stringr)
library(dplyr)
library(GGally)

human <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/human2.txt", 
                    sep =",", header = T)

1. Showing a graphical overview and summaries of the variables of the data

str(human)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
dim (human)
## [1] 155   8
summary(human)
##     Edu2.FM          Labo.FM          Edu.Exp         Life.Exp    
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50
library(corrplot)
pairs(human)

corr_human <- cor(human)
corrplot(corr_human, method="circle", type = "upper", cl.pos = "r", tl.pos = "d", tl.cex = 0.7)

This graph is not very readable, so I tried another approach that I learned last week from one of the assignments I peer-reviewed.

color_correlation <- function(data, mapping, method="p", use="pairwise", ...){
    # Function by user20650 on Stackoverflow (https://stackoverflow.com/a/53685979)
    # grab data
    x <- eval_data_col(data, mapping$x)
    y <- eval_data_col(data, mapping$y)

    # calculate correlation
    corr <- cor(x, y, method=method, use=use)

    # calculate colour based on correlation value
    # Here I have set a correlation of minus one to blue, 
    # zero to white, and one to red 
    # Change this to suit: possibly extend to add as an argument of `my_fn`
    colFn <- colorRampPalette(c("blue", "white", "red"), interpolate ='spline')
    fill <- colFn(100)[findInterval(corr, seq(-1, 1, length=100))]

    ggally_cor(data = data, mapping = mapping, ...) + 
      theme_void() +
      theme(panel.background = element_rect(fill=fill))
  }


ggpairs(
  data = human,
  upper = list(continuous = color_correlation),
  lower = list(continuous = wrap("points", alpha = 0.3, size=0.3)),
)

Most of the variables don't seem to be normally distributed. The strongest positive correlations seem to be between Life.Exp and Edu.Exp, and between Ado.Birth and Mat.Mor. The strongest negative correlation seems to be between Mat.Mor and Life.Exp.

2. Performing principal component analysis on the raw data

pca_human <- prcomp(human)

# creating a summary of pca_human
s <- summary(pca_human)

# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2, ], digits = 3)

# print out the percentages of variance
pca_pr
##   PC1   PC2   PC3   PC4   PC5   PC6   PC7   PC8 
## 99.99  0.01  0.00  0.00  0.00  0.00  0.00  0.00
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

When the variables are not standardized, PC1 seems to capture 99.99% of the variability. This happens because the variables are on very different scales, and GNI, with by far the largest variance, dominates the first principal component.
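A small sketch of my own confirming this by printing the variances of the raw variables (prcomp() without scaling works on the covariance matrix, so the variable with the largest variance dominates):

# variances of the raw variables, largest first
sort(sapply(human, var), decreasing = TRUE)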

3. Standardizing the variables and repeating principal component analysis

# standardize the variables
human_std <- scale(human)

# print out summaries of the standardized variables
summary(human_std)
##     Edu2.FM           Labo.FM           Edu.Exp           Life.Exp      
##  Min.   :-2.8189   Min.   :-2.6247   Min.   :-2.7378   Min.   :-2.7188  
##  1st Qu.:-0.5233   1st Qu.:-0.5484   1st Qu.:-0.6782   1st Qu.:-0.6425  
##  Median : 0.3503   Median : 0.2316   Median : 0.1140   Median : 0.3056  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5958   3rd Qu.: 0.7350   3rd Qu.: 0.7126   3rd Qu.: 0.6717  
##  Max.   : 2.6646   Max.   : 1.6632   Max.   : 2.4730   Max.   : 1.4218  
##       GNI             Mat.Mor          Ado.Birth          Parli.F       
##  Min.   :-0.9193   Min.   :-0.6992   Min.   :-1.1325   Min.   :-1.8203  
##  1st Qu.:-0.7243   1st Qu.:-0.6496   1st Qu.:-0.8394   1st Qu.:-0.7409  
##  Median :-0.3013   Median :-0.4726   Median :-0.3298   Median :-0.1403  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.3712   3rd Qu.: 0.1932   3rd Qu.: 0.6030   3rd Qu.: 0.6127  
##  Max.   : 5.6890   Max.   : 4.4899   Max.   : 3.8344   Max.   : 3.1850
pca_human <- prcomp(human_std)

# creating a summary of pca_human
s <- summary(pca_human)

# rounded percentanges of variance captured by each PC
pca_pr <- round(100*s$importance[2, ], digits = 3)

# print out the percentages of variance
pca_pr
##    PC1    PC2    PC3    PC4    PC5    PC6    PC7    PC8 
## 53.605 16.237  9.571  7.583  5.477  3.595  2.634  1.298
# create object pc_lab to be used as axis labels
pc_lab <- paste0(c("Health and knowledge","Empowerment"), " (", pca_pr, "%)")

# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])

4. Interpretations of the first two principal component dimensions on the standardized human data

The results are very different. In the first biplot almost all of the variability was captured by PC1, whereas now PC1 captures 53.6% and PC2 16.2%. They are different because after standardization the variables are comparable with each other, as the scales are similar. PC2 describes women's participation in working life and politics; for example, in some Islamic countries women's participation is very low. PC1 describes expected years of schooling, life expectancy and gross national income per capita; the maternal mortality ratio and adolescent birth rate are negatively correlated with these other variables such as education and life expectancy. On the left of the biplot there are Western countries and on the right poorer countries, for example African countries.
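To back up this interpretation, here is a small sketch of my own printing the loadings of the standardized variables on the first two principal components:

# loadings (rotation matrix) of the variables on PC1 and PC2
round(pca_human$rotation[, 1:2], 2)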

5. Loading tea dataset and converting its character variables to factors

tea <- read.csv("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/tea.csv", stringsAsFactors = TRUE)

Exploring the structure and dimensions of tea data.

str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "+60","15-24",..: 4 5 5 2 5 2 4 4 4 4 ...
##  $ frequency       : Factor w/ 4 levels "+2/day","1 to 2/week",..: 3 3 1 3 1 3 4 2 1 1 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
dim(tea)
## [1] 300  36

The data has 300 observations (rows) of 36 variables (columns).

Browsing the contents and visualizing the data

View(tea)
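Since only View() is used above, here is a sketch of how a handful of the variables (the same six columns that are used for the MCA below) could also be visualized as bar plots; this is an extra illustration, not output that was actually run here:

library(ggplot2)
library(dplyr)
library(tidyr)

tea %>%
  select(Tea, How, how, sugar, where, lunch) %>%
  mutate(across(everything(), as.character)) %>%     # factors to text before pivoting
  pivot_longer(everything(), names_to = "variable", values_to = "value") %>%
  ggplot(aes(value)) +
  geom_bar() +
  facet_wrap("variable", scales = "free") +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))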

Using MCA on the tea data and drawing a variable biplot of the analysis

library(FactoMineR)
library(dplyr)
library(tidyr)

# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

# select the 'keep_columns' to create a new dataset
tea_time <- select(tea, one_of(keep_columns))

# look at the summaries and structure of the data
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.279   0.261   0.219   0.189   0.177   0.156   0.144
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519   7.841
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953  77.794
##                        Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.141   0.117   0.087   0.062
## % of var.              7.705   6.392   4.724   3.385
## Cumulative % of var.  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139   0.003
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626   0.027
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111   0.107
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841   0.127
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979   0.035
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990   0.020
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347   0.102
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459   0.161
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968   0.478
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898   0.141
##                     v.test     Dim.3     ctr    cos2  v.test  
## black                0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            2.867 |   0.433   9.160   0.338  10.053 |
## green               -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone               -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                3.226 |   1.329  14.771   0.218   8.081 |
## milk                 2.422 |   0.013   0.003   0.000   0.116 |
## other                5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag             -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged          -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualizing the MCA

plot(mca, invisible=c("ind"), graph.type = "classic", habillage = "quali")

Dimension 1 seems to describe how sophisticated the tea drinking is: on the right-hand side of the map there is unpackaged tea bought from a tea shop, and on the left side tea bags from a chain store. With these variables, it is hard to say what kind of phenomenon is behind Dimension 2.
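As an extra check on this reading (not part of the exercise), the category contributions to the first two dimensions can be listed directly from the MCA object:

# category contributions (%) to Dim 1 and Dim 2, sorted by Dim 1;
# the "where" and "how" categories should dominate Dim 1 if the interpretation holds
contrib <- round(mca$var$contrib[, 1:2], 2)
contrib[order(-contrib[, 1]), ]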

date()
## [1] "Mon Dec 12 21:36:00 2022"

Chapter 6. Analysis of longitudinal data

Reading the data and repeating the steps from the data wrangling exercise (converting the categorical variables to factors and converting the data sets to long form)

library(tidyverse)
library(ggplot2)

BPRS <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/BPRS.txt", sep =" ", header = T)
RATS <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/rats.txt", sep ="\t", header = T)

BPRS$treatment <- factor(BPRS$treatment)
BPRS$subject <- factor(BPRS$subject)

RATS$ID <- factor(RATS$ID)
RATS$Group <- factor(RATS$Group)

BPRSL <-  pivot_longer(BPRS, cols = -c(treatment, subject), names_to = "weeks", values_to = "bprs") %>%
  arrange(weeks) %>% mutate(week = as.integer(substr(weeks,5,5)))

RATSL <- pivot_longer(RATS, cols=-c(ID,Group), names_to = "WD",values_to = "Weight")  %>%  
  mutate(Time = as.integer(substr(WD,3,4))) %>% arrange(Time)

Working with RATSL data, graphical displays of longitudinal data.

# Draw the plot
ggplot(RATSL, aes(x = Time, y = Weight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(RATSL$Weight), max(RATSL$Weight)))

Group 1 seems to have a much lower weight than the other two groups.
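A quick numeric check of that impression (an extra step, not part of the exercise):

# mean weight per group at the first measurement time
RATSL %>%
  filter(Time == min(Time)) %>%
  group_by(Group) %>%
  summarise(mean_weight = mean(Weight))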

Standardizing the data and visualizing individual weight profiles

RATSL <- RATSL %>%
  group_by(Time) %>%
  mutate( stdweight = (Weight - mean(Weight))/sd(Weight) ) %>%
  ungroup()

# Draw the plot
ggplot(RATSL, aes(x = Time, y = stdweight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(RATSL$stdweight), max(RATSL$stdweight)))

Creating mean weight profiles for the three groups

library(plyr)
## ------------------------------------------------------------------------------
## You have loaded plyr after dplyr - this is likely to cause problems.
## If you need functions from both plyr and dplyr, please load plyr first, then dplyr:
## library(plyr); library(dplyr)
## ------------------------------------------------------------------------------
## 
## Attaching package: 'plyr'
## The following objects are masked from 'package:plotly':
## 
##     arrange, mutate, rename, summarise
## The following objects are masked from 'package:dplyr':
## 
##     arrange, count, desc, failwith, id, mutate, rename, summarise,
##     summarize
## The following object is masked from 'package:purrr':
## 
##     compact
# Summary data with mean and standard error of Weight by Group and Time
RATSS <- RATSL %>% ddply(c("Group", "Time"), summarise, mean = mean(Weight), N = length(Weight), se = sd(Weight) / sqrt(N))
# Couldn't make the group_by(), summarise(), ungroup() approach from the exercise work, so I used this plyr::ddply() alternative instead
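# For reference, a sketch of the group_by()/summarise() approach: the masking
# messages above suggest plyr's summarise() was shadowing dplyr's, so calling
# dplyr's version explicitly by namespace should work around the problem.
RATSS2 <- RATSL %>%
  group_by(Group, Time) %>%
  dplyr::summarise(mean = mean(Weight), se = sd(Weight) / sqrt(n()), .groups = "drop")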

ggplot(RATSS, aes(x = Time, y = mean, linetype = Group, shape = Group)) +
  geom_line() +
  scale_linetype_manual(values = c(1,2,3)) +
  geom_point(size=3) +
  scale_shape_manual(values = c(1,2,4)) +
  geom_errorbar(aes(ymin=mean-se, ymax=mean+se, linetype="1"), width=0.3) +
  theme(legend.position = c(0.8,0.5)) +
  scale_y_continuous(name = "mean(Weight) +/- se(Weight)")

In every group the mean weight increases slightly during the study period. Group 2 has the largest standard errors of the three groups, while in group 1 the errors seem to be minimal.

Graphing side-by-side box plots of the observations at each time point

ggplot(RATSL, aes(x = factor(Time), y = Weight, fill = Group )) +
  geom_boxplot() +
  theme_bw() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank()) +  
  scale_x_discrete(name = "Time") +
  scale_y_continuous(name = "Weight", limits = c(min(RATSL$Weight), max(RATSL$Weight)))

Drawing boxplots of the mean weights of each group

ggplot(RATSS, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun = "mean", geom = "point", shape=23, size=4, fill = "white") +
  scale_y_continuous(name = "mean(Weight)")

I did not find any outliers, so I did not filter them out :)
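For completeness, if the boxplots had shown a clear outlier, it could have been removed before re-drawing the plot. A purely hypothetical sketch (the 550 g threshold is made up, and no filtering was actually applied):

# hypothetical: keep only summary rows with mean weight below 550 g
RATSS_filtered <- filter(RATSS, mean < 550)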

And then I ran out of time. This was very difficult!